Results 1 - 7 of 7
1.
IEEE Trans Image Process; 33: 2714-2729, 2024.
Article in English | MEDLINE | ID: mdl-38557629

ABSTRACT

Billions of people share images from their daily lives on social media every day. However, their biometric information (e.g., fingerprints) could easily be stolen from these images. The threat of fingerprint leakage from social media has created a strong desire to anonymize shared images while maintaining image quality, since fingerprints act as a lifelong individual biometric password. To guard against fingerprint leakage, adversarial attacks that add imperceptible perturbations to fingerprint images have emerged as a feasible solution. However, existing works of this kind are either weak in black-box transferability or cause the images to have an unnatural appearance. Motivated by the visual perception hierarchy (i.e., high-level perception exploits model-shared semantics that transfer well across models, while low-level perception extracts primitive stimuli that result in high visual sensitivity when a suspicious stimulus is presented), we propose FingerSafe, a hierarchical perceptual protective noise injection framework, to address the above-mentioned problems. For black-box transferability, we inject protective noise into the fingerprint orientation field to perturb the model-shared high-level semantics (i.e., fingerprint ridges). Considering visual naturalness, we suppress the low-level local contrast stimulus by regularizing the response of the Lateral Geniculate Nucleus. Our proposed FingerSafe is the first to provide feasible fingerprint protection in both digital (up to 94.12%) and realistic scenarios (Twitter and Facebook, up to 68.75%). Our code can be found at https://github.com/nlsde-safety-team/FingerSafe.
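As a rough illustration of the two-term objective the abstract describes (perturb high-level, model-shared features while keeping the added noise visually unobtrusive), the sketch below runs a PGD-style loop in PyTorch. It is not the authors' FingerSafe implementation (that is at the linked repository); the `feature_extractor`, the `local_contrast` stand-in for the LGN response regularizer, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of hierarchical protective-noise injection (not the official FingerSafe code;
# see https://github.com/nlsde-safety-team/FingerSafe for the authors' implementation).
# Assumptions: `feature_extractor` stands in for the orientation-field / high-level semantic
# branch, and a local-contrast penalty stands in for the LGN response regularizer.
import torch
import torch.nn.functional as F

def local_contrast(x, kernel_size=3):
    """Rough proxy for low-level contrast: deviation from a local mean."""
    mean = F.avg_pool2d(x, kernel_size, stride=1, padding=kernel_size // 2)
    return (x - mean).abs().mean()

def protect(image, feature_extractor, steps=40, eps=8/255, alpha=1/255, lam=0.1):
    """PGD-style loop: push high-level features away from the original while
    keeping the injected noise visually smooth (low local contrast)."""
    delta = torch.zeros_like(image, requires_grad=True)
    with torch.no_grad():
        target_feat = feature_extractor(image)
    for _ in range(steps):
        adv = (image + delta).clamp(0, 1)
        feat = feature_extractor(adv)
        # maximize semantic distance, minimize perceptible local contrast of the noise
        loss = -F.mse_loss(feat, target_feat) + lam * local_contrast(delta)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (image + delta).detach().clamp(0, 1)

if __name__ == "__main__":
    # toy stand-in for a fingerprint feature network
    net = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
                              torch.nn.AdaptiveAvgPool2d(4), torch.nn.Flatten())
    img = torch.rand(1, 3, 64, 64)
    protected = protect(img, net)
    print(protected.shape)
```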


Subject(s)
Social Media, Humans, Dermatoglyphics, Privacy, Visual Perception
2.
IEEE Trans Neural Netw Learn Syst; 34(2): 677-689, 2023 Feb.
Article in English | MEDLINE | ID: mdl-34370673

ABSTRACT

Imitation learning from observation (LfO) is preferable to imitation learning from demonstration (LfD) because it does not require expert actions when reconstructing the expert policy from expert data. However, previous studies imply that LfO performs far worse than LfD, which makes it challenging to employ LfO in practice. By contrast, this article proves that LfO is almost equivalent to LfD in the deterministic robot environment, and more generally even in the robot environment with bounded randomness. In the deterministic robot environment, we show from the perspective of control theory that the inverse dynamics disagreement between LfO and LfD approaches zero, meaning that LfO is almost equivalent to LfD. To relax the deterministic constraint and better match practical environments, we then consider bounded randomness in the robot environment and prove that the optimization targets of LfD and LfO remain almost the same in this more general setting. Extensive experiments on multiple robot tasks demonstrate that LfO achieves performance comparable to LfD empirically. In fact, the most common robot systems in reality are robot environments with bounded randomness (i.e., the setting considered in this article). Hence, our findings greatly extend the potential of LfO and suggest that LfO can be safely applied in practice without sacrificing performance relative to LfD.
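The core argument can be sketched with the usual occupancy-measure decomposition; the exact derivation in the article may differ, so treat the following as a hedged reconstruction rather than the paper's own formula. Because the expert and the imitator act under the same transition kernel, that kernel cancels and the gap between the LfD and LfO objectives reduces to an inverse-dynamics disagreement term:

```latex
% Hedged sketch of the standard inverse-dynamics-disagreement identity; the article's
% exact derivation and assumptions may differ.
\[
  \underbrace{D_{\mathrm{KL}}\!\big(\rho_{\pi}(s,a)\,\|\,\rho_{E}(s,a)\big)}_{\text{LfD objective}}
  \;-\;
  \underbrace{D_{\mathrm{KL}}\!\big(\rho_{\pi}(s,s')\,\|\,\rho_{E}(s,s')\big)}_{\text{LfO objective}}
  \;=\;
  \mathbb{E}_{\rho_{\pi}(s,s')}\!\left[
     D_{\mathrm{KL}}\!\big(\rho_{\pi}(a\mid s,s')\,\|\,\rho_{E}(a\mid s,s')\big)
  \right].
\]
```

With deterministic dynamics the action is (almost surely) determined by the pair (s, s'), so both inverse-dynamics conditionals collapse to the same point mass and the extra term vanishes; with bounded randomness it stays small, which is the sense in which the two objectives remain almost the same.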

3.
IEEE Trans Cybern; 53(8): 5226-5239, 2023 Aug.
Article in English | MEDLINE | ID: mdl-35976829

ABSTRACT

Recently, deep neural networks have achieved promising performance for filling in large missing regions in image inpainting tasks. However, they usually adopt a standard convolutional architecture over the corrupted image, which leads to meaningless content such as color discrepancy, blur, and other artifacts. Moreover, most inpainting approaches cannot handle large contiguous missing areas well. To address these problems, we propose a generic inpainting framework capable of handling incomplete images with both contiguous and discontiguous large missing areas. We pose this in an adversarial manner, deploying region-wise operations in both the generator and the discriminator to handle the two types of regions, namely existing regions and missing ones, separately. Moreover, a correlation loss is introduced to capture non-local correlations between different patches and thus guide the generator to obtain more information during inference. With the help of this region-wise generative adversarial mechanism, our framework can restore semantically reasonable and visually realistic images for both discontiguous and contiguous large missing areas. Extensive experiments on three widely used image inpainting datasets show, both qualitatively and quantitatively, that the proposed model significantly outperforms state-of-the-art approaches on large contiguous and discontiguous missing areas.
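To make the region-wise idea concrete, here is a minimal PyTorch sketch of two ingredients the abstract mentions: compositing generated content only into the missing region, and a patch-level correlation loss. Everything here (the Gram-matrix form of the correlation term, the patch size, the mask layout) is an assumed simplification, not the paper's actual architecture or loss.

```python
# Minimal sketch of a region-wise composition step plus a patch-correlation loss
# (hypothetical simplification; the paper's generator/discriminator design is richer).
import torch
import torch.nn.functional as F

def regionwise_compose(original, generated, mask):
    """Keep existing pixels where mask == 1, use generated content where mask == 0."""
    return mask * original + (1 - mask) * generated

def correlation_loss(pred, target, patch=8):
    """Encourage non-local patch correlations of the prediction to match the target.
    Patches are unfolded, normalized, and compared through their similarity (Gram) matrices."""
    def gram(x):
        p = F.unfold(x, kernel_size=patch, stride=patch)   # (B, C*patch*patch, N)
        p = F.normalize(p, dim=1)
        return torch.bmm(p.transpose(1, 2), p)             # (B, N, N) patch similarities
    return F.l1_loss(gram(pred), gram(target))

if __name__ == "__main__":
    img = torch.rand(1, 3, 64, 64)
    fake = torch.rand(1, 3, 64, 64)
    mask = torch.ones(1, 1, 64, 64)
    mask[:, :, 16:48, 16:48] = 0          # large contiguous hole
    out = regionwise_compose(img, fake, mask)
    print(out.shape, correlation_loss(out, img).item())
```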

4.
IEEE Trans Image Process; 31: 598-611, 2022.
Article in English | MEDLINE | ID: mdl-34851825

ABSTRACT

Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs). Recently, the adversarial patch, with noise confined to a small and localized region, has emerged because of its feasibility in real-world scenarios. However, existing strategies fail to generate adversarial patches with strong generalization ability because they ignore the inherent biases of models. In other words, the adversarial patches are input-specific and fail to attack images from all classes or different models, especially unseen classes and black-box models. To address this problem, this paper proposes a bias-based framework that generates universal adversarial patches with strong generalization ability, exploiting the perceptual bias and attentional bias of models to improve attacking ability. Regarding the perceptual bias, since DNNs are strongly biased toward textures, we exploit hard examples, which convey strong model uncertainty, and extract a textural patch prior from them using style similarities. This patch prior lies closer to decision boundaries and promotes attacks across classes. As for the attentional bias, motivated by the fact that different models share similar attention patterns toward the same image, we exploit this bias by confusing the model-shared attention patterns. The generated adversarial patches thus obtain stronger transferability across models. Taking automatic check-out (ACO) as the typical scenario, we conduct extensive experiments, including white-box and black-box settings, in both the digital world (RPC, the largest ACO-related dataset) and the physical world (Taobao and JD, the world's largest online shopping platforms). Experimental results demonstrate that the proposed framework outperforms state-of-the-art adversarial patch attack methods.
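The sketch below shows what a bare-bones universal-patch optimization loop can look like in PyTorch: a single patch is shared across all images and updated to maximize the classification loss. The textural patch prior built from hard examples and the attention-confusion term described in the abstract are deliberately omitted, and the toy model, data, and hyperparameters are placeholders.

```python
# Rough sketch of universal adversarial patch optimization (hypothetical training loop;
# the paper's perceptual-bias prior and attentional-bias term are omitted here).
import torch
import torch.nn.functional as F

def apply_patch(images, patch, x0=0, y0=0):
    """Paste the (clamped) patch onto every image at a fixed location."""
    out = images.clone()
    out[:, :, y0:y0 + patch.shape[-2], x0:x0 + patch.shape[-1]] = patch
    return out

def train_universal_patch(model, loader, patch_size=16, steps=100, lr=0.05):
    patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        for images, labels in loader:
            adv = apply_patch(images, patch.clamp(0, 1))
            loss = -F.cross_entropy(model(adv), labels)   # maximize misclassification
            opt.zero_grad()
            loss.backward()
            opt.step()
    return patch.detach().clamp(0, 1)

if __name__ == "__main__":
    model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
                                torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
                                torch.nn.Linear(8, 10))
    data = [(torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))) for _ in range(2)]
    patch = train_universal_patch(model, data, steps=3)
    print(patch.shape)
```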


Subject(s)
Attentional Bias, Neural Networks (Computer), Uncertainty
5.
IEEE Trans Image Process; 30: 8955-8967, 2021.
Article in English | MEDLINE | ID: mdl-34699360

ABSTRACT

Adversarial images contain imperceptible perturbations that mislead deep neural networks (DNNs) and have attracted great attention in recent years. Although several defense strategies achieve encouraging robustness against adversarial samples, most of them fail to consider robustness to common corruptions (e.g., noise, blur, and weather/digital effects). To address this problem, we propose a simple yet effective method, named Progressive Diversified Augmentation (PDA), which improves the robustness of DNNs by progressively injecting diverse adversarial noise during training. In other words, DNNs trained with PDA achieve better general robustness against both adversarial attacks and common corruptions than other strategies. In addition, PDA requires less training time and maintains high standard accuracy on clean examples. Furthermore, we theoretically prove that PDA can control the perturbation bound and thereby guarantee better robustness. Extensive results on CIFAR-10, SVHN, ImageNet, CIFAR-10-C, and ImageNet-C demonstrate that PDA comprehensively outperforms its counterparts in robustness against adversarial examples and common corruptions as well as on clean images. Further experiments on frequency-based perturbations and visualized gradients show that PDA achieves general robustness and is more aligned with the human visual system.
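A minimal sketch of the "progressively inject diverse noise" idea is shown below, assuming a linearly growing perturbation bound and a small pool of noise types (uniform, Gaussian, one-step FGSM). The real PDA schedule, noise families, and bound control are defined in the paper; this is only an illustrative training loop.

```python
# Hedged sketch of progressive, diversified noise injection during training
# (simplified; not the paper's actual PDA schedule or noise set).
import torch
import torch.nn.functional as F

def diversified_noise(x, eps, model, labels):
    """Sample one noise type per call: uniform, Gaussian, or one-step FGSM."""
    kind = torch.randint(0, 3, (1,)).item()
    if kind == 0:
        return (torch.rand_like(x) * 2 - 1) * eps
    if kind == 1:
        return torch.randn_like(x).clamp(-1, 1) * eps
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), labels)
    grad, = torch.autograd.grad(loss, x_adv)
    return eps * grad.sign()

def train_pda(model, loader, epochs=5, eps_max=8/255):
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    for epoch in range(epochs):
        eps = eps_max * (epoch + 1) / epochs          # progressively enlarge the bound
        for x, y in loader:
            noise = diversified_noise(x, eps, model, y)
            x_noisy = (x + noise).clamp(0, 1).detach()
            loss = F.cross_entropy(model(x_noisy), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

if __name__ == "__main__":
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    data = [(torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))) for _ in range(4)]
    train_pda(model, data, epochs=2)
    print("done")
```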


Subject(s)
Algorithms, Neural Networks (Computer), Humans
6.
IEEE Trans Image Process; 30: 5769-5781, 2021.
Article in English | MEDLINE | ID: mdl-34161231

ABSTRACT

In practice, deep neural networks have been found to be vulnerable to various types of noise, such as adversarial examples and corruption. Various adversarial defense methods have accordingly been developed to improve the adversarial robustness of deep models. However, because they simply train on data mixed with adversarial examples, most of these models still fail to defend against generalized types of noise. Motivated by the fact that hidden layers play a highly important role in maintaining a robust model, this paper proposes a simple yet powerful training algorithm, named Adversarial Noise Propagation (ANP), which injects noise into the hidden layers in a layer-wise manner. ANP can be implemented efficiently by exploiting the nature of the backward-forward training style. Through thorough investigation, we find that different hidden layers make different contributions to model robustness and clean accuracy, and that shallow layers are comparatively more critical than deep layers. Moreover, our framework can easily be combined with other adversarial training methods to further improve model robustness by exploiting the potential of hidden layers. Extensive experiments on MNIST, CIFAR-10, CIFAR-10-C, CIFAR-10-P, and ImageNet demonstrate that ANP provides strong robustness for deep models against both adversarial and corrupted inputs, and significantly outperforms various adversarial defense methods.
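As an illustration of layer-wise hidden-noise injection, the sketch below estimates gradients with respect to hidden activations in one pass and re-runs the forward pass with signed-gradient noise added to those activations. This two-pass scheme is a hypothetical simplification of ANP's backward-forward implementation; the layer choice, noise magnitude, and toy network are assumptions.

```python
# Hedged sketch in the spirit of ANP: inject signed-gradient noise into hidden activations
# (simplified two-pass scheme, not the paper's exact backward-forward implementation).
import torch
import torch.nn.functional as F

class NoisyReLU(torch.nn.Module):
    """ReLU whose output can be perturbed by an externally supplied noise tensor."""
    def __init__(self):
        super().__init__()
        self.noise = None
    def forward(self, x):
        out = torch.relu(x)
        if self.noise is not None:
            out = out + self.noise
        return out

def train_step(model, noisy_layers, x, y, opt, eps=0.1):
    # pass 1: clean forward/backward to estimate gradients w.r.t. hidden activations
    acts = []
    handles = [l.register_forward_hook(lambda m, i, o: acts.append(o)) for l in noisy_layers]
    loss = F.cross_entropy(model(x), y)
    grads = torch.autograd.grad(loss, acts)
    for h in handles:
        h.remove()
    # pass 2: inject signed-gradient noise into each hidden layer, then train
    for layer, g in zip(noisy_layers, grads):
        layer.noise = eps * g.sign()
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    for layer in noisy_layers:
        layer.noise = None

if __name__ == "__main__":
    relu1, relu2 = NoisyReLU(), NoisyReLU()
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 64), relu1,
                                torch.nn.Linear(64, 64), relu2, torch.nn.Linear(64, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    x, y = torch.rand(16, 1, 28, 28), torch.randint(0, 10, (16,))
    train_step(model, [relu1, relu2], x, y, opt)
    print("ok")
```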

7.
IEEE Trans Image Process; 30: 1291-1304, 2021.
Article in English | MEDLINE | ID: mdl-33290221

ABSTRACT

Deep neural networks (DNNs) are vulnerable to adversarial examples, inputs with imperceptible perturbations that mislead DNNs into incorrect results. Despite the potential risk they pose, adversarial examples are also valuable for providing insight into the weaknesses and blind spots of DNNs. Thus, interpretability of a DNN in the adversarial setting aims to explain the rationale behind its decision-making process and enables a deeper understanding that leads to better practical applications. To address this issue, we explain adversarial robustness for deep models from the new perspective of neuron sensitivity, which is measured by the intensity of neuron behavior variation between benign and adversarial examples. In this paper, we first draw a close connection between adversarial robustness and neuron sensitivity, since sensitive neurons make the most non-trivial contributions to model predictions in the adversarial setting. Based on this, we further propose to improve adversarial robustness by stabilizing the behavior of sensitive neurons. Moreover, we demonstrate that state-of-the-art adversarial training methods improve model robustness by reducing neuron sensitivity, which in turn confirms the strong connection between adversarial robustness and neuron sensitivity. Extensive experiments on various datasets demonstrate the effectiveness of our algorithm. To the best of our knowledge, we are the first to study adversarial robustness through neuron sensitivity.
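One way to turn the abstract's definition into code is to score each neuron by how much its activation changes between benign and adversarial inputs, then penalize that change for the most sensitive neurons. The sketch below does this in PyTorch; the specific metric, the top-k selection, and the random-noise stand-in for adversarial examples are illustrative assumptions, not the paper's formulation.

```python
# Hedged sketch of a neuron-sensitivity measurement and a stabilization penalty
# (hypothetical form; the paper defines its own sensitivity metric and training scheme).
import torch
import torch.nn.functional as F

def neuron_sensitivity(trunk, benign, adversarial):
    """Per-neuron mean absolute activation change between benign and adversarial inputs.
    `trunk` is assumed to be the network truncated at the hidden layer of interest."""
    a_clean = trunk(benign)
    a_adv = trunk(adversarial)
    return (a_clean - a_adv).abs().mean(dim=0)          # one score per neuron

def stabilization_loss(trunk, benign, adversarial, top_k=16):
    """Penalize behavior variation of the most sensitive neurons."""
    sens = neuron_sensitivity(trunk, benign, adversarial)
    idx = torch.topk(sens, k=min(top_k, sens.numel())).indices
    a_clean = trunk(benign)[:, idx]
    a_adv = trunk(adversarial)[:, idx]
    return F.mse_loss(a_adv, a_clean)

if __name__ == "__main__":
    trunk = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 64), torch.nn.ReLU())
    x = torch.rand(32, 1, 28, 28)
    x_adv = (x + 0.03 * torch.randn_like(x)).clamp(0, 1)  # stand-in for real adversarial examples
    print(stabilization_loss(trunk, x, x_adv).item())
```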


Subject(s)
Deep Learning, Image Processing (Computer-Assisted)/methods, Models (Neurological), Algorithms, Artificial Intelligence